News

Humans bring gender bias to interactions with AI

17 Nov 2025

A study reveals that humans project gendered expectations onto AI – and treat ‘female’ and ‘male’ systems differently.

According to a new study by researchers from LMU and Trinity College Dublin, humans bring gender bias to their interactions with artificial intelligence (AI). The findings, recently published in the journal iScience, have important implications for the design, use, and regulation of interactive AI systems.

The study, involving over 400 participants, found that people exploited female-labeled AI and distrusted male-labeled AI to much the same extent as they do human partners bearing the same gender labels. For female-labeled AI, moreover, exploitation was even more prevalent in the human-AI setting than in interactions with human partners.

[Figure: Illustrative representation of the interaction between humans and “male” or “female” AI agents. The study shows that people exploit female-labeled AI to much the same extent as human women and distrust male-labeled AI to much the same extent as human men. © ChatGPT + Jurgis Karpus & Taha Yasseri]

This is the first study to examine the role of machine gender in human-AI collaboration using a systematic, empirical approach.

Participants in the study played repeated rounds of the Prisoner’s Dilemma – a classic experiment in behavioral game theory and economics in which each player must decide whether to cooperate with or exploit the other. To investigate whether trust and willingness to cooperate are also influenced by gender in AI contexts, the partners in the game were labeled as human or AI and further labeled male, female, non-binary, or gender-neutral.
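To make the incentive structure concrete, here is a minimal sketch of one round of the game in Python. The payoff values are the textbook defaults, not those reported for the study, and are purely illustrative: defecting pays more than cooperating against either choice by the partner, yet mutual cooperation beats mutual defection – the dilemma at the heart of the experiment.

# One round of the Prisoner's Dilemma. Payoff values are textbook
# defaults, chosen only for illustration -- the study's actual
# payoffs are not given in this article.

# Payoffs as (player A, player B): "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both do fairly well
    ("C", "D"): (0, 5),  # the cooperator is exploited by the defector
    ("D", "C"): (5, 0),  # the defector exploits the cooperator
    ("D", "D"): (1, 1),  # mutual defection: both do poorly
}

def play_round(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return both players' payoffs for one round of the game."""
    return PAYOFFS[(choice_a, choice_b)]

# Example: player A exploits a cooperating partner.
print(play_round("D", "C"))  # -> (5, 0)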

AI design without biases?

[Photo: Dr. Jurgis Karpus, a postdoctoral researcher at the Chair of Philosophy of Mind at LMU | © Jurgis Karpus]

The findings show that gendered expectations from human-human settings extend to human-AI cooperation. According to the authors, this has significant implications for how organizations should design, deploy, and regulate interactive AI systems.

“This study raises an important dilemma,” says Dr. Jurgis Karpus, co-author of the study and postdoctoral researcher at the Chair of Philosophy of Mind at LMU. “Giving AI agents human-like features can foster cooperation between people and AI, but it also risks transferring and reinforcing unwelcome existing gender biases from people’s interactions with fellow humans.”

As AI becomes part of everyday life, the authors note, it is important to consider gender representation in AI design carefully – for example, to maximize people’s engagement and build trust in their interactions with automated systems. Designers of interactive AI agents should recognize and mitigate gender biases carried over from human interactions, both to avoid reinforcing harmful gender discrimination and to create trustworthy, fair, and socially responsible AI systems.

Sepideh Bazazi, Jurgis Karpus & Taha Yasseri: AI’s assigned gender affects human-AI cooperation. iScience 2025
